
    Identification of Proteins that Contribute to Yeast Heat Stress by Lysine Acetylation

    Evidence is emerging that protein lysine acetylation may be a novel type of post-translational modification (PTM) contributing to the mechanisms of yeast heat stress responses. Proteomics studies, including ours, have identified over 1,000 acetylated proteins in the yeast proteome, which comprises about 6,000 proteins. Our lab recently identified, by mass spectrometry, 596 proteins that underwent acetylation changes during heat shock. However, the role of lysine acetylation at specific residues of specific proteins in yeast thermotolerance remains largely unknown. This study selected 43 proteins from our lab’s previous work and examined their possible contributions to yeast heat stress responses. We found that knockout of 32 genes caused a growth defect in yeast at 40 °C, suggesting these proteins are required for yeast innate thermotolerance. Among these 32 proteins, knockout of 5 genes, including rpl31a, sin3, aco1, adh1 and pfk2, almost completely inhibited yeast growth at 40 °C, suggesting they are ideal candidates for further studies. Site-directed mutagenesis was employed to replace lysine K638 in the Aco1p protein in order to mimic different acetylation states: K638 was first replaced with alanine to examine whether this lysine residue is essential for yeast survival and growth, and then K638 was replaced with glutamine or arginine to mimic acetylated or un-acetylatable Aco1p, respectively. Similar work was performed for the proteins Rpl31ap, Sin3p, and Hsp104p, whose functions were confirmed in this study to be required for yeast innate and acquired thermotolerance. Work is in progress to examine whether these manipulations impact yeast thermotolerance. In conclusion, this study identified 32 proteins with changing acetylation that are required for yeast innate thermotolerance. In addition, this work generated 4 mutant strains harboring the desired residue substitutions, which are useful for examining the role of specific lysine residues whose acetylation may regulate the yeast heat stress responses.

    The POA Application in the Teaching of Chinese Writing as a Foreign Language

    The traditional teaching mode takes the “text as the core”. A new teaching approach, the production-oriented approach (POA), was put forward by a famous professor; it directs attention to both “input” and “production”. In essence, the POA represents a substantial innovation on, and development of, the traditional teaching mode, in which learning and using are often disconnected. Based on the POA, this paper takes Lesson 25 of “Developing Chinese: Intermediate Writing II” as an example and explores effective ways to teach production from three aspects, namely driving, facilitating, and evaluating, so as to provide a reference for subsequent writing teaching design.

    RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation

    Retrieving documents and prepending them in-context at inference time improves the performance of language models (LMs) on a wide range of tasks. However, these documents, often spanning hundreds of words, make inference substantially more expensive. We propose compressing the retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational cost but also relieves the burden on LMs of identifying relevant information in long retrieved documents. We present two compressors -- an extractive compressor, which selects useful sentences from retrieved documents, and an abstractive compressor, which generates summaries by synthesizing information from multiple documents. Both compressors are trained to improve LMs' performance on end tasks when the generated summaries are prepended to the LMs' input, while keeping the summary concise. If the retrieved documents are irrelevant to the input or offer no additional information to the LM, our compressor can return an empty string, implementing selective augmentation. We evaluate our approach on a language modeling task and an open-domain question answering task. We achieve a compression rate as low as 6% with minimal loss in performance on both tasks, significantly outperforming off-the-shelf summarization models. We show that our compressors trained for one LM can transfer to other LMs on the language modeling task, and that they provide summaries largely faithful to the retrieved documents.
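    The compress-then-prepend pipeline with selective augmentation can be illustrated with a minimal sketch. This is not RECOMP's trained compressor: the word-overlap relevance score, the threshold, and all function names below are illustrative stand-ins for the learned extractive model.

```python
def relevance(sentence: str, query: str) -> float:
    """Crude relevance proxy: fraction of query words present in the sentence.
    (RECOMP instead trains a neural compressor against end-task performance.)"""
    q = set(query.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / max(len(q), 1)

def extractive_compress(documents: list[str], query: str,
                        top_k: int = 2, threshold: float = 0.3) -> str:
    """Select the top-k sentences most relevant to the query.
    Returns an empty string when nothing clears the threshold --
    selective augmentation: the LM then runs without retrieval."""
    sentences = [s.strip() for doc in documents
                 for s in doc.split(".") if s.strip()]
    ranked = sorted(sentences, key=lambda s: relevance(s, query), reverse=True)
    kept = [s for s in ranked[:top_k] if relevance(s, query) >= threshold]
    return ". ".join(kept)

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the summary only when the compressor produced one."""
    summary = extractive_compress(documents, query)
    return (summary + "\n\n" + query) if summary else query
```

    With a relevant corpus the prompt gains a short summary; with an irrelevant one the compressor returns an empty string and the query is passed through unchanged, which is the cost-saving behavior the abstract describes.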

    Embedding Uncertain Knowledge Graphs

    Embedding models for deterministic Knowledge Graphs (KGs) have been extensively studied with the purpose of capturing latent semantic relations between entities and incorporating structured knowledge into machine learning. However, many KGs model uncertain knowledge, typically representing the inherent uncertainty of relation facts with a confidence score, and embedding such uncertain knowledge remains an unresolved challenge. Capturing uncertain knowledge will benefit many knowledge-driven applications, such as question answering and semantic search, by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model, UKGE, which aims to preserve both the structural and the uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks: confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge by achieving promising results, and it consistently outperforms the baselines on these tasks.
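    The core idea, learning embeddings against confidence scores rather than binary labels, can be sketched as follows. This is a toy illustration, not the released UKGE implementation: the random embeddings, entity names, and fixed scalars are assumptions; only the general form (an inner-product plausibility mapped into [0, 1] and fit to observed confidences by squared error) reflects the objective described above.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Toy embeddings; the real model learns these by gradient descent.
entities = {name: rng.normal(size=dim) for name in ["paris", "france", "berlin"]}
relations = {"capital_of": rng.normal(size=dim)}
w, b = 1.0, 0.0  # mapping parameters, also learned in the real model

def plausibility(h: str, r: str, t: str) -> float:
    """Unbounded plausibility of a triple: inner product <h * r, t>."""
    return float(np.dot(entities[h] * relations[r], entities[t]))

def confidence(h: str, r: str, t: str) -> float:
    """Map plausibility into [0, 1] with a logistic function."""
    return 1.0 / (1.0 + np.exp(-(w * plausibility(h, r, t) + b)))

def mse_loss(triples_with_conf) -> float:
    """Squared error between predicted and observed confidence scores --
    the quantity an uncertain-KG embedding model minimizes."""
    return sum((confidence(h, r, t) - c) ** 2
               for (h, r, t), c in triples_with_conf) / len(triples_with_conf)

data = [(("paris", "capital_of", "france"), 0.95),
        (("berlin", "capital_of", "france"), 0.05)]
loss = mse_loss(data)
```

    Training would update the embedding vectors (and `w`, `b`) to drive this loss down, so that high-confidence facts score near their observed confidence rather than simply "true".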